Been wanting to post some more here without much luck. Giving it another try. Warning: no proofreading or anything, just unedited ramblings ahead.
I am just finishing up my second big project for my new business. It has been a very slow process to get the whole thing (the business) off the ground, but that is mostly because I am having to learn so many new things. None of it is particularly hard, per se; it's just that I have to pay attention to everything at the same time. I'm not used to that. Being the only employee is great, and maybe the only way I can work happily, but the downside is that I end up doing everything, and, to put it mildly, I'm not good at everything.
It would probably be more efficient for me to work for another company that is run by people who, you know, actually know how to run a company. But that doesn't seem as fun. So I'm struggling along by myself, probably making a lot of stupid mistakes, and probably spending too much time reinventing wheels. But at least they are my mistakes and my wheels.
Building websites is pretty easy. Building websites that can be maintained and extended through time is more difficult. And building many websites that can be maintained and extended by just one person is even more difficult. This is where almost all my thinking goes.
Standardization is the key. I have spent a lot of time building the tools that will let me build websites. I could probably code each site by hand in less time than it is taking me to develop an automated approach, but the problem is that once you have dozens of sites built on different hacked-together code, maintenance becomes a nightmare. So what I've been aiming at is having just one very flexible code base that runs all the sites, and then as upgrades and bug fixes are made to that code they are automatically rolled out to all the sites.
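Just to make the idea concrete, here's a minimal sketch in PHP of what a per-site entry point could look like. Every path, name, and function below is hypothetical, not my actual setup:

<?php
// Hypothetical per-site entry point. Each client site gets a tiny
// file like this, while the real engine lives in one shared location
// that every site includes, so upgrading the shared code upgrades
// every site at once. All paths, names, and functions are made up.

// Site-specific settings only; no application logic lives in this file.
$config = array(
    'site_name' => 'exampleclient',
    'theme'     => 'exampleclient-2006',
    'db_name'   => 'exampleclient_db',
);

// Pull in the single shared code base.
require '/usr/local/lib/site-engine/bootstrap.php';

// Hand off to the engine, which loads the theme, connects to the
// database, and renders the requested page.
site_engine_run($config);

The point is that nothing site-specific ever creeps into the shared engine, so a bug fix there only has to be made once.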
And I've done pretty well with this. The setup is actually pretty extreme. Here's the quick technical rundown:
OMG. Nokia N95. Of course it will take forever for that to hit the U.S., and possible specs have already leaked for the even more insane N97. I want that.
I'm happy about this news from Tim Berners-Lee concerning the future path(s) of HTML development:
Some things are clearer with hindsight of several years. It is necessary to evolve HTML incrementally. The attempt to get the world to switch to XML, including quotes around attribute values and slashes in empty tags and namespaces all at once didn't work. The large HTML-generating public did not move, largely because the browsers didn't complain. Some large communities did shift and are enjoying the fruits of well-formed systems, but not all. It is important to maintain HTML incrementally, as well as continuing a transition to well-formed world, and developing more power in that world.

This is smart. If we could have all jumped to XML at the same time maybe that would be better, but it didn't happen and it's not going to suddenly happen, so we need an easier way to go forward. Perhaps the greatest thing about HTML is how dead simple it is to get started with it. Of course that is also the worst thing about it (in the sense that it doesn't enforce itself rigorously, so while it's easy to get going it's also easy to fly out of control and create truly horrible markup). Still, I think it is better to stay more on the loose HTML side rather than the strict XML side as we go forward. Maybe not in a theoretical sense, but definitely in a practical sense.
Now just get us some more powerful form elements!
Crazy small foldable computer from Samsung: SPH-P9000. Who names these things?
I'm not a big Adobe fan, and I've always been against Flash if there are any other ways to get the job done, but it does seem like there are a lot of interesting things going on. As the web moves towards more richly interactive design (AJAXified web 2.0 stuff), maybe Flash starts to make more sense?
In any case, Adobe just made a huge contribution to the Mozilla project that is scheduled to pay off sometime in 2008: Tamarin Project. From the blog of one of the engineers:
Today Adobe announced that the ECMAScript 4 compatible virtual machine in the Adobe Flash Player has been contributed to the Mozilla project under the name Tamarin. It is the single largest contribution to the Mozilla foundation since its inception and consists of about 135,000 lines of source code. The engine is fully open source using the standard Mozilla license, with the Mozilla foundation retaining full ownership.

Tamarin will allow Mozilla (and therefore Firefox) to easily move to javascript 2. And while I'm always a little nervous about Flash (and javascript for that matter), I'm also getting impatient for more powerful scripting tools, and this is probably the way we are going to get them. So I'm staying hopeful and will watch this.
Adobe's project Apollo:
Apollo is the code name for a cross-operating system runtime being developed by Adobe that allows developers to leverage their existing web development skills (Flash, Flex, HTML, JavaScript, Ajax) to build and deploy Rich Internet Applications (RIAs) to the desktop.

In other words: Flash apps you can download and run on your desktop. I am surprised they didn't do this sooner. Supposed to be ready middle of next year.
I've been way deep in the code for the past few weeks. Just got a client site launched yesterday, and I'm still sort of in the thick of it, but it was a nice (if tiring) opportunity to really get everything polished up. Like I always say, it's that last 2 percent that takes most of the time.
But even though this job went okay, it took me too long, and now I basically have to start right in on the next one. I guess that is a good problem to have. The next ones should be much easier given the much better state of my program. Although I guess that's what I always say. Still, I think it might finally be true this time.
Great advanced MySQL blog.
Finally: mod_auth_token Apache module. This is supposedly similar to lighttpd's mod_secdownload. I think this will keep me from switching to lighttpd as a web server. The way I'm handling it now is a bit convoluted (although I'm sort of glad I got it to work). The issue is that you want your application logic (PHP for me) to do authentication, but you don't want it to serve large binary files (with fpassthru() or whatever) because that is really inefficient compared to just having Apache do it (without PHP). mod_auth_token takes care of the problem by letting you generate a token in PHP (an md5 of the current timestamp and a 'secret') and then pass this token to Apache along with the file request. If the timestamp is new enough, Apache serves the file. I'm doing a similar thing by hand now - keeping the files outside the web root and then creating symlinks with PHP, which I then erase on subsequent requests to the system. It works fine, but makes for some complex code that mod_auth_token will greatly simplify.
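For what it's worth, here's a rough sketch of the PHP side, assuming the token is an md5 of the shared secret, the file path, and a hex timestamp, and that the module is protecting a /downloads/ location. The exact token recipe and URL layout are whatever the module's docs specify, so treat this as an illustration rather than gospel:

<?php
// Sketch: generate a time-limited download URL for mod_auth_token.
// Assumes Apache is configured with the same shared secret and is
// protecting the /downloads/ location; the token below follows the
// secret + file path + hex timestamp scheme described above.

$secret    = 'my-shared-secret';         // must match the secret in the Apache config
$file      = '/podcast-episode-12.mp3';  // path below the protected location
$timestamp = dechex(time());             // current time, hex encoded

// Build the token and the final URL: prefix, token, timestamp, file.
$token = md5($secret . $file . $timestamp);
$url   = '/downloads/' . $token . '/' . $timestamp . $file;

echo '<a href="' . htmlspecialchars($url) . '">Download</a>';

Apache then recomputes the hash from its own copy of the secret and refuses the request if the token doesn't match or the timestamp is too old, so PHP never has to touch the file itself.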
Now my only module wish is something to throttle traffic (I use mod_cband now since I'm running Apache 2) but allow certain file extensions (or maybe all files served from specified directories) to burst to x kb/s for the first n bytes of the file. In other words, I'd like to be able to specify a max kb/s on a per virtual host basis (mod_cband does this perfectly), but then further allow that limit to be bypassed for the first n bytes of particular files. The point is to facilitate fast and stutter-free streaming starts. Maybe you would throttle bandwidth at 1 mb/s for a virtual host, but for .mp3 files have it burst to 3 mb/s for the first 100 kb of the file (and then slow down to 1 mb/s for the rest of the file).
Maybe I'll try to get in touch with the mod_cband guy about it. It's a little bit arcane, but I think people would like it.
Interesting, super geeky look at BigTable, Google's in-house developed storage framework.